
    Object Detection and Classification in the Visible and Infrared Spectrums

    The overarching theme of this dissertation is the development of automated detection and/or classification systems for challenging infrared scenarios. The six works presented herein can be categorized into four problem scenarios. In the first scenario, long-distance detection and classification of vehicles in thermal imagery, a custom convolutional network architecture is proposed for small thermal target detection. For the second scenario, thermal face landmark detection and thermal cross-spectral face verification, a publicly available visible and thermal face dataset is introduced, along with benchmark results for several landmark detection and face verification algorithms. Furthermore, a novel visible-to-thermal transfer learning algorithm for face landmark detection is presented. The third scenario addresses near-infrared cross-spectral periocular recognition with a coupled conditional generative adversarial network guided by auxiliary synthetic loss functions. Finally, a deep sparse feature selection and fusion method is proposed to detect the presence of textured contact lenses prior to near-infrared iris recognition.

    Visible-to-Thermal Transfer Learning for Facial Landmark Detection

    There has been increasing interest in face recognition in the thermal infrared spectrum. A critical step in this process is face landmark detection. However, landmark detection in the thermal spectrum presents a unique set of challenges compared to the visible spectrum: inherently lower spatial resolution due to longer wavelength, differences in phenomenology, and limited availability of labeled thermal face imagery for algorithm development and training. Thermal infrared imaging does have the advantage of being able to passively acquire facial heat signatures without the need for active or ambient illumination in low-light and nighttime environments. In such scenarios, thermal imaging must operate by itself, without corresponding/paired visible imagery. Mindful of this constraint, we propose visible-to-thermal parameter transfer learning using a coupled convolutional network architecture as a means to leverage visible face data when training a model for thermal-only face landmark detection. This differentiates our approach from models trained solely on thermal images and from models that require a fusion of visible and thermal images at test time. In this work, we implement and analyze four types of parameter transfer learning methods in the context of thermal face landmark detection: Siamese (shared) layers, Linear Layer Regularization (LLR), Linear Kernel Regularization (LKR), and Residual Parameter Transformations (RPT). These transfer learning approaches are compared against a baseline version of the network and an Active Appearance Model (AAM), both of which are trained only on thermal data. We achieve a 6.5%-9.5% improvement on the DEVCOM ARL Multi-modal Thermal Face Dataset and a 4% improvement on the RWTH Aachen University Thermal Face Dataset over the baseline model. We show that LLR, LKR, and RPT all result in improved thermal face landmark detection performance compared to the baseline and AAM, demonstrating that transfer learning leveraging visible spectrum data improves thermal face landmarking.
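    As a rough illustration of the regularization-based transfer idea described above (a minimal sketch, not the paper's implementation; the function and parameter names `llr_penalty`, `lam`, etc. are hypothetical), Linear Layer Regularization can be thought of as an L2 penalty that pulls the thermal model's layer weights toward the corresponding pretrained visible-spectrum weights during training:

    ```python
    import numpy as np

    # Sketch: LLR-style penalty. W_thermal are the weights of a layer in the
    # thermal-only model being trained; W_visible are the frozen weights of
    # the corresponding layer in a model pretrained on visible-spectrum faces.
    def llr_penalty(w_thermal, w_visible, lam=0.1):
        """L2 penalty discouraging thermal weights from drifting
        away from the pretrained visible-spectrum weights."""
        return lam * np.sum((w_thermal - w_visible) ** 2)

    def total_loss(task_loss, w_thermal, w_visible, lam=0.1):
        # The landmark-detection task loss plus the transfer regularizer.
        return task_loss + llr_penalty(w_thermal, w_visible, lam)
    ```

    Under this framing, Siamese (shared) layers are the limiting case where the penalty is infinite (weights are tied), while RPT instead learns a residual transformation applied to the visible weights rather than penalizing the difference directly.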